237 research outputs found

    Faster Isomorphism for $p$-Groups of Class 2 and Exponent $p$

    The group isomorphism problem asks whether two groups, given by their Cayley tables, are isomorphic. For groups of order $n$, an algorithm with $n^{\log n + O(1)}$ running time, attributed to Tarjan, was proposed in the 1970s [Mil78]. Despite extensive study over the past decades, the current best group isomorphism algorithm runs in $n^{(1/4 + o(1))\log n}$ time [Ros13]. Isomorphism testing for $p$-groups of (nilpotent) class 2 and exponent $p$ has been identified as a major barrier to obtaining an $n^{o(\log n)}$-time algorithm for the group isomorphism problem. Although $p$-groups of class 2 and exponent $p$ have much simpler algebraic structure than general groups, the best-known isomorphism testing algorithm for this group class also runs in $n^{O(\log n)}$ time. In this paper, we present an isomorphism testing algorithm for $p$-groups of class 2 and exponent $p$ with running time $n^{O((\log n)^{5/6})}$ for any prime $p > 2$. Our result is based on a novel reduction to the skew-symmetric matrix tuple isometry problem [IQ19]. To obtain the reduction, we develop several tools for matrix space analysis, including a matrix space individualization-refinement method and a characterization of low-rank matrix spaces.
    Comment: Accepted to STOC 202
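    The $n^{\log n + O(1)}$ bound attributed to Tarjan rests on a simple fact: a group of order $n$ has a generating set of size at most $\log_2 n$, since each new generator at least doubles the generated subgroup. One can therefore enumerate all $n^{\log_2 n}$ candidate images of the generators and check each induced map. Below is a minimal sketch of that baseline for groups given as Cayley tables; it is not the paper's algorithm, and all helper names are ours.

```python
from itertools import product

def identity(T):
    """Index of the identity element in Cayley table T (T[a][b] = a*b)."""
    n = len(T)
    return next(e for e in range(n) if all(T[e][x] == x for x in range(n)))

def generating_set(T):
    """Greedy generating set of size <= log2 n, plus the discovery order:
    each entry (z, y, g) records z = y * g, so the word for each element
    can later be replayed inside another group."""
    n, e = len(T), identity(T)
    gens, closure, derivs = [], {e}, []
    while len(closure) < n:
        gens.append(next(x for x in range(n) if x not in closure))
        frontier = list(closure)
        while frontier:  # close under right-multiplication by generators
            y = frontier.pop()
            for g in gens:
                z = T[y][g]
                if z not in closure:
                    closure.add(z)
                    derivs.append((z, y, g))
                    frontier.append(z)
    return gens, derivs

def isomorphic(TG, TH):
    """Tarjan-style n^(log n + O(1)) isomorphism test for Cayley tables."""
    n = len(TG)
    if len(TH) != n:
        return False
    gens, derivs = generating_set(TG)
    eG, eH = identity(TG), identity(TH)
    # Try every assignment of images for the <= log2 n generators.
    for images in product(range(n), repeat=len(gens)):
        img = dict(zip(gens, images))
        phi = {eG: eH}
        for z, y, g in derivs:          # replay z = y * g inside H
            phi[z] = TH[phi[y]][img[g]]
        if len(set(phi.values())) == n and all(
            phi[TG[a][b]] == TH[phi[a]][phi[b]]
            for a in range(n) for b in range(n)
        ):
            return True
    return False
```

    For instance, the tables of $\mathbb{Z}_4$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$ are correctly reported non-isomorphic: every candidate image of a generator of $\mathbb{Z}_4$ has order at most 2, so no induced map is a bijection.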

    Learning mixtures of structured distributions over discrete domains

    Let $\mathfrak{C}$ be a class of probability distributions over the discrete domain $[n] = \{1, \ldots, n\}$. We show that if $\mathfrak{C}$ satisfies a rather general condition -- essentially, that each distribution in $\mathfrak{C}$ can be well-approximated by a variable-width histogram with few bins -- then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of $k$ unknown distributions from $\mathfrak{C}$. We analyze several natural types of distributions over $[n]$, including log-concave, monotone hazard rate, and unimodal distributions, and show that each has the required structural property of being well-approximated by a histogram with few bins. Applying our general algorithm, we obtain near-optimally efficient algorithms for all these mixture learning problems.
    Comment: preliminary full version of SODA'13 paper
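    The structural condition is concrete: each distribution is approximated by one that is piecewise constant on a small number of variable-width intervals. Below is a minimal sketch of that flattening step; the bin boundaries are our illustrative choice (roughly doubling in width, as in standard bucketing for monotone distributions), not the paper's construction.

```python
import numpy as np

def flatten(p, boundaries):
    """Variable-width histogram approximation of a distribution p over
    [n]: within each bin, the probability mass is spread uniformly."""
    q = np.empty_like(p)
    edges = [0] + list(boundaries) + [len(p)]
    for lo, hi in zip(edges[:-1], edges[1:]):
        q[lo:hi] = p[lo:hi].sum() / (hi - lo)
    return q

# Example: a monotone (hence unimodal) distribution on [64] is already
# close in L1 distance to a histogram with only six variable-width bins.
n = 64
p = np.exp(-0.1 * np.arange(n)); p /= p.sum()
q = flatten(p, boundaries=[2, 5, 10, 20, 40])
print(np.abs(p - q).sum())  # L1 distance between p and its flattening
```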

    A composition theorem for parity kill number

    In this work, we study the parity complexity measures $\mathsf{C}^{\oplus}_{\min}[f]$ and $\mathsf{DT}^{\oplus}[f]$. $\mathsf{C}^{\oplus}_{\min}[f]$ is the \emph{parity kill number} of $f$: the minimum number of parities on the input variables one has to fix in order to "kill" $f$, i.e., to make it constant. $\mathsf{DT}^{\oplus}[f]$ is the depth of the shallowest \emph{parity decision tree} which computes $f$. These complexity measures have in recent years become increasingly important in the fields of communication complexity \cite{ZS09, MO09, ZS10, TWXZ13} and pseudorandomness \cite{BK12, Sha11, CT13}. Our main result is a composition theorem for $\mathsf{C}^{\oplus}_{\min}$. The $k$-th power of $f$, denoted $f^{\circ k}$, is the function obtained by composing $f$ with itself $k$ times. We prove that if $f$ is not a parity function, then $\mathsf{C}^{\oplus}_{\min}[f^{\circ k}] \geq \Omega(\mathsf{C}_{\min}[f]^{k})$. In other words, the parity kill number of $f$ is essentially supermultiplicative in the \emph{normal} kill number of $f$ (also known as the minimum certificate complexity). As an application of our composition theorem, we show lower bounds on the parity complexity measures of $\mathsf{Sort}^{\circ k}$ and $\mathsf{HI}^{\circ k}$. Here $\mathsf{Sort}$ is the sort function due to Ambainis \cite{Amb06}, and $\mathsf{HI}$ is Kushilevitz's hemi-icosahedron function \cite{NW95}. In doing so, we disprove a conjecture of Montanaro and Osborne \cite{MO09} which had applications to communication complexity and computational learning theory. In addition, we give new lower bounds for conjectures of \cite{MO09, ZS10} and \cite{TWXZ13}.
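    For intuition: fixing $k$ parities $\langle a_i, x\rangle = b_i$ restricts $f$ to an affine subspace of $\mathbb{F}_2^n$, and $\mathsf{C}^{\oplus}_{\min}[f]$ is the least $k$ for which some nonempty such subspace makes $f$ constant. The brute-force sketch below computes the measure directly for tiny $n$ (exponential time; all names are ours).

```python
from itertools import product, combinations

def parity_kill_number(f, n):
    """Smallest k such that some k parity constraints <a, x> = b
    (a in F_2^n, a != 0) have a nonempty solution set on which f is
    constant. Brute force over all constraint sets; tiny n only.
    Always terminates by k = n, since n independent constraints
    isolate a single point."""
    points = list(product([0, 1], repeat=n))
    parities = [a for a in points if any(a)]   # nonzero a in F_2^n
    for k in range(n + 1):
        for subset in combinations(parities, k):
            for bits in product([0, 1], repeat=k):
                sub = [x for x in points
                       if all(sum(ai * xi for ai, xi in zip(a, x)) % 2 == b
                              for a, b in zip(subset, bits))]
                if sub and len({f(x) for x in sub}) == 1:
                    return k

# AND on 3 bits: fixing the single parity x1 = 0 already kills it.
print(parity_kill_number(lambda x: x[0] & x[1] & x[2], 3))  # -> 1
```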

    Dynamic Kernel Sparsifiers

    A geometric graph associated with a set of points $P = \{x_1, x_2, \cdots, x_n\} \subset \mathbb{R}^d$ and a fixed kernel function $\mathsf{K} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}_{\geq 0}$ is a complete graph on $P$ such that the weight of edge $(x_i, x_j)$ is $\mathsf{K}(x_i, x_j)$. We present a fully dynamic data structure that maintains a spectral sparsifier of a geometric graph under updates that change the locations of points in $P$ one at a time. The update time of our data structure is $n^{o(1)}$ with high probability, and the initialization time is $n^{1+o(1)}$. Under certain assumptions, we can provide a fully dynamic spectral sparsifier that is robust to an adaptive adversary. We further show that, for the Laplacian matrices of these geometric graphs, it is possible to maintain random sketches for the results of matrix-vector multiplication and inverse-matrix-vector multiplication in $n^{o(1)}$ time, under updates that change the locations of points in $P$ or change the query vector by a sparse difference.
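    As a reference point for what is being maintained, the dense object is easy to write down: the complete kernel graph has $\binom{n}{2}$ weighted edges, so even forming its Laplacian costs $\Theta(n^2)$ kernel evaluations, which is exactly what a dynamically maintained sparsifier with $n^{1+o(1)}$ edges avoids. A minimal sketch with a Gaussian kernel (our choice of kernel, not the paper's data structure):

```python
import numpy as np

def kernel_laplacian(P, K):
    """Dense Laplacian of the complete geometric graph on points P:
    edge (x_i, x_j) has weight K(x_i, x_j). Requires O(n^2) kernel
    evaluations, one per edge of the complete graph."""
    n = len(P)
    W = np.array([[K(P[i], P[j]) if i != j else 0.0
                   for j in range(n)] for i in range(n)])
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))                       # n = 100 points in R^3
gauss = lambda x, y: np.exp(-np.sum((x - y) ** 2))  # Gaussian kernel
L = kernel_laplacian(P, gauss)
x = rng.normal(size=100)
print(x @ L @ x >= -1e-9)  # Laplacian quadratic form is nonnegative (PSD)
```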

    Vehicle Type Recognition Combining Global and Local Features via Two-Stage Classification

    This study proposes a new vehicle type recognition method that combines global and local features via two-stage classification. To extract a continuous and complete global feature, an improved Canny edge detection algorithm with smoothing and non-maximum suppression is proposed. To extract local features from four partitioned key patches, a set of Gabor wavelet kernels with five scales and eight orientations is introduced. Unlike single-stage classification, where all features are fed into one classifier simultaneously, the proposed two-stage strategy leverages two types of features and two classifiers. In the first stage, a preliminary large-vehicle versus small-vehicle decision is made from the global feature via a k-nearest-neighbor probability classifier. Based on the preliminary result, the specific recognition of bus, truck, van, or sedan is performed on the local feature via a discriminative sparse-representation-based classifier. We evaluate the proposed method on public, established datasets covering challenging cases such as partial occlusion, poor illumination, and scale variation. Experimental results show that the proposed method outperforms existing state-of-the-art methods.
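    A schematic of the cascade described above: the class name, feature matrices, and both stage classifiers below are our stand-ins (generic k-NN models); the paper's improved-Canny global feature, Gabor patch features, k-NN probability classifier, and discriminative sparse-representation classifier are not reproduced, only the two-stage decision flow.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

LARGE = {"bus", "truck"}  # coarse size classes; vans and sedans are small

class TwoStageVehicleClassifier:
    """Two-stage cascade: stage 1 predicts large vs. small from the
    global feature; stage 2 predicts the specific type from the local
    feature, using a model trained only on the predicted size class."""

    def fit(self, X_global, X_local, types):
        types = np.asarray(types)
        sizes = np.where(np.isin(types, list(LARGE)), "large", "small")
        self.coarse = KNeighborsClassifier(n_neighbors=5).fit(X_global, sizes)
        self.fine = {}
        for s in ("large", "small"):
            m = sizes == s  # fine model sees only one size class
            self.fine[s] = KNeighborsClassifier(n_neighbors=5).fit(
                X_local[m], types[m])
        return self

    def predict_one(self, x_global, x_local):
        # Stage 1: coarse size decision from the global (edge) feature.
        s = self.coarse.predict(x_global.reshape(1, -1))[0]
        # Stage 2: bus/truck or van/sedan from the local (patch) feature.
        return self.fine[s].predict(x_local.reshape(1, -1))[0]
```

    The design point the cascade illustrates is that the stage-2 classifier never has to separate, say, buses from sedans; it only discriminates within the size class chosen in stage 1.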